{"timestamp":"2022-02-17T14:35:56.487-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[... the same ERROR entry repeats every 2-3 seconds, timestamps 2022-02-17T14:35:58.605-06:00 through 2022-02-17T14:36:35.159-06:00 ...]
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:36:37.417-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:36:39.886-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:36:41.889-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:36:44.118-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:36:46.365-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:36:48.523-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:36:50.981-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:36:53.217-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:36:55.765-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:36:58.038-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:37:00.278-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:37:02.969-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:37:46.762-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:37:48.937-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:37:51.356-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:37:53.578-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:37:55.434-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:37:57.514-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:37:59.856-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:38:02.142-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:38:04.146-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:38:06.358-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:38:08.334-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T14:38:47.149-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:38:49.398-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:38:51.691-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:38:53.657-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:38:55.626-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:38:57.666-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:38:59.852-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:39:02.145-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:39:04.347-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:39:06.627-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:39:08.820-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:39:11.158-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:39:13.538-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:40:01.388-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:40:04.682-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:40:07.007-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:40:09.425-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:40:11.542-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:40:13.716-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:40:16.515-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:40:18.918-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:40:21.131-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:40:23.481-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:40:26.798-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:40:29.404-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:41:08.854-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:41:10.897-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:41:12.947-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:41:14.943-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:41:16.845-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:41:18.803-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:41:20.789-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:41:22.727-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:41:25.047-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:41:26.980-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:41:29.033-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:41:31.216-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:41:33.191-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[... the same ERROR entry then repeats roughly every 2 seconds, from 2022-02-17T14:41:35.022-06:00 through 2022-02-17T14:42:12.896-06:00, identical except for the timestamp: each retry fails on the same bulk item [213] against index optimize-process-instance-documentprocessflow_v6, document id 08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7, with the nested-documents limit of 10000 exceeded ...]
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:43:11.746-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:43:13.620-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:43:15.553-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:43:17.457-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:43:19.332-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:43:21.291-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:43:23.222-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:43:25.354-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:43:27.308-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:43:29.262-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:43:31.214-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:43:33.328-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:43:35.277-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:44:23.047-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:44:25.063-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:44:27.148-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:44:29.445-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:44:31.693-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:44:34.728-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:44:37.008-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:44:39.138-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:44:41.436-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:44:44.480-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:44:46.717-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:44:48.772-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:44:52.277-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:45:31.947-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:45:33.859-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:45:35.745-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:45:37.644-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:45:39.507-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:45:41.346-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:45:43.336-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:45:45.407-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:45:47.487-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:45:49.409-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:45:51.364-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:45:53.412-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T14:45:55.768-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[identical error entries for the same document (id 08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7, index optimize-process-instance-documentprocessflow_v6) repeated at roughly 2-second intervals through 2022-02-17T14:46:36.005-06:00]
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:46:38.447-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:46:40.412-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:46:42.491-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:46:44.656-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:46:46.634-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:46:48.814-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:46:51.807-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:46:54.044-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:46:55.936-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:46:57.827-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:47:01.665-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:47:43.686-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:47:45.725-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:47:47.814-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:47:49.843-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:47:52.020-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:47:54.126-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:47:56.435-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:47:58.490-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:48:00.512-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:48:02.893-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:48:04.896-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:48:06.798-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T14:48:08.782-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
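The exception text above names two related settings: Optimize's `es.settings.index.nested_documents_limit` configuration key and the Elasticsearch index-level setting `index.mapping.nested_objects.limit`, whose default of 10000 is being exceeded here. A minimal sketch of the suggested remediation follows; the exact YAML location should be verified against the Optimize documentation for the version in use, and the value 15000 is purely illustrative:

```yaml
# environment-config.yaml for Camunda Optimize (illustrative snippet).
# The path mirrors the key named in the error message:
#   es.settings.index.nested_documents_limit
es:
  settings:
    index:
      # Raises index.mapping.nested_objects.limit on Optimize-managed indexes.
      # Default is 10000. Increase carefully: every nested object is stored as
      # a separate Lucene document, so large limits add heap and indexing cost.
      nested_documents_limit: 15000
```

For an index that already exists (such as `optimize-process-instance-documentprocessflow_v6` above), `index.mapping.nested_objects.limit` is a dynamic setting and can also be raised directly through Elasticsearch's `PUT /<index>/_settings` API, though the error message points at the Optimize configuration key as the documented route.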
{"timestamp":"2022-02-17T14:48:50.348-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:48:54.393-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:48:57.406-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:49:01.585-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:49:04.012-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:49:07.103-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:49:10.059-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:49:14.285-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:49:16.488-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:49:19.542-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:49:21.701-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:49:25.224-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:50:18.668-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:50:20.903-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:50:23.547-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:50:26.879-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:50:29.081-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:50:32.936-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:50:34.862-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:50:37.485-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:50:39.636-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:50:42.507-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:50:44.836-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:50:47.304-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:51:42.881-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:51:45.085-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:51:47.784-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:51:51.276-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:51:53.523-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:51:55.537-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:51:57.457-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:51:59.386-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:52:02.337-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:52:04.611-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:52:06.824-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:52:09.159-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:52:43.912-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:52:46.005-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:52:47.985-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:52:49.939-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:52:51.884-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:52:53.895-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:52:55.806-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:52:57.871-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:53:00.261-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:53:03.285-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:53:05.789-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:53:08.200-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:53:12.933-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T14:53:15.170-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[The entry above repeats 18 more times with an otherwise identical message and stack trace, differing only in timestamp: 14:53:17.749, 14:53:21.630, 14:53:23.882, 14:53:26.017, 14:53:28.222, 14:53:30.509, 14:53:33.291, 14:53:35.789, 14:53:39.744, 14:53:42.139, 14:53:44.576, 14:53:46.878, 14:53:48.786, 14:53:50.979, 14:53:53.736, 14:53:56.055, 14:53:58.490, and 14:54:00.530 (all 2022-02-17, UTC-06:00). The final entry at 14:54:00.530 is truncated in the captured log.]
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:54:02.822-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:05.056-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:07.322-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:09.657-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:54:11.931-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:14.208-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:16.358-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:20.527-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:54:22.543-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:24.613-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:26.856-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:29.395-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
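The exception text above names two related settings: Optimize's `es.settings.index.nested_documents_limit` and the Elasticsearch index-level setting `index.mapping.nested_objects.limit` that it corresponds to. A minimal sketch of the remediation the error message suggests, assuming the dotted key in the log maps onto a nested YAML path in Optimize's environment-config.yaml and using a hypothetical new limit of 15000 — verify the exact path and a safe value against the Optimize documentation for the deployed version, since raising the limit increases per-document memory pressure in Elasticsearch:

```yaml
# Hypothetical excerpt from Optimize's environment-config.yaml.
# YAML path assumed from the dotted key in the log message
# (es.settings.index.nested_documents_limit); check the docs before applying.
es:
  settings:
    index:
      # Raised cautiously above the default of 10000 that the bulk import
      # exceeded for index optimize-process-instance-documentprocessflow_v6.
      # Applied to Optimize's indices as index.mapping.nested_objects.limit.
      nested_documents_limit: 15000
```

The same limit could likely also be raised directly on the affected index via the Elasticsearch update-settings API (`index.mapping.nested_objects.limit`), but letting Optimize manage it keeps the value consistent when its indices are recreated.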
{"timestamp":"2022-02-17T14:54:31.838-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:33.968-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:36.486-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:38.933-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:54:42.603-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:45.809-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:47.923-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:50.159-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:54:52.434-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:55.659-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:57.718-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:54:59.907-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:55:03.148-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:55:05.055-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T14:55:07.205-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T14:55:57.559-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:55:59.753-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:03.796-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:06.837-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:56:08.601-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:10.572-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:12.644-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:14.665-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:56:16.626-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:18.548-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:20.490-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:22.491-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:56:24.584-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:26.381-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:28.314-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:30.309-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:56:32.312-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:34.442-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:36.895-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:38.955-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:56:41.191-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:43.475-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:45.274-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:47.310-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:56:49.049-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:51.110-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:52.990-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:56:54.964-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
[The same OptimizeRuntimeException/mapper_parsing_exception entry repeats verbatim, with identical index, document id, and stack trace, at 14:56:57.272, 14:56:59.450, 14:57:03.369, 14:57:05.801, 14:57:08.341, 14:57:10.398, 14:57:12.735, 14:57:15.020, 14:57:18.016, 14:57:20.686, 14:57:23.178, 14:57:25.699, 14:57:28.314, 14:57:32.193, 14:57:34.850, and 14:57:37.079; the final occurrence at 14:57:39.282 is truncated mid-entry.]
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:57:41.821-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:57:44.494-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:57:47.110-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:57:49.413-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:57:51.947-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:57:54.160-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:57:56.328-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:57:58.644-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:58:01.476-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:58:03.630-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:58:06.001-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:58:08.225-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[The same ERROR entry (identical class, document id 08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7, and stack trace; only the timestamp differs) repeats approximately every two seconds from 2022-02-17T14:58:12.047-06:00 through 2022-02-17T14:58:49.886-06:00, 19 occurrences in total, the last truncated mid-entry. The repeated entries are omitted here.]
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:58:52.277-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:58:54.853-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:58:56.980-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:58:58.795-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:59:00.828-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:59:03.316-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:59:05.262-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:59:07.287-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:59:09.245-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:59:11.146-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:59:13.132-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:59:15.227-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:59:50.525-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:59:52.836-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:59:55.239-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T14:59:57.532-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T14:59:59.911-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:00:01.307-06:00","level":"INFO","service":"camunda-optimize","serviceId":"20088","thread":"ThreadPoolTaskScheduler-1","class":"org.camunda.optimize.service.identity.UserTaskIdentityCacheService","method":"syncIdentitiesWithRetry","message":"Engine assignee/candidateGroup identity sync complete","line":"114"} 
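The repeating exception above names its own remediation knob: Optimize's configuration key `es.settings.index.nested_documents_limit`, which corresponds to the Elasticsearch index-level setting `index.mapping.nested_objects.limit` (default 10000, the limit exceeded here). A minimal sketch of the change in Optimize's `environment-config.yaml` follows; the YAML nesting is an assumption inferred from the dotted key printed in the log and should be verified against the Optimize configuration reference, and the value 15000 is only an illustrative increase:

```yaml
# Sketch: raise the nested-object limit Optimize applies when creating
# its Elasticsearch indices. Key structure inferred from the dotted name
# in the error message (es.settings.index.nested_documents_limit);
# confirm against the Optimize configuration docs before use.
es:
  settings:
    index:
      nested_documents_limit: 15000  # default 10000; increase carefully
```

Each nested document is indexed as a separate Lucene document, so raising this limit increases memory and indexing cost per process instance, hence the log's advice to increase it "carefully". For indices that already exist (such as `optimize-process-instance-documentprocessflow_v6` named above), the underlying Elasticsearch setting `index.mapping.nested_objects.limit` can also be adjusted via the update index settings API.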
{"timestamp":"2022-02-17T15:00:24.733-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:00:27.142-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:00:29.441-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:00:31.687-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:00:34.704-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:00:37.500-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:00:40.168-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:00:42.894-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:00:45.040-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:00:48.446-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:00:50.685-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:00:54.492-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:00:56.760-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:00:59.035-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:01:01.264-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:01:04.047-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:01:06.485-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:01:10.197-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:01:14.230-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:01:17.012-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:01:19.019-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:01:22.059-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:01:24.275-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:01:26.564-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:01:29.410-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:01:33.220-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:01:35.628-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:01:38.324-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:02:24.579-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:02:26.581-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:02:28.440-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:02:30.472-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:02:32.338-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:02:34.354-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:02:36.215-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:02:38.205-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:02:40.235-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:02:42.363-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:02:44.247-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[identical ERROR entry repeated 17 more times at ~2 s intervals, timestamps 2022-02-17T15:02:46.238-06:00 through 2022-02-17T15:03:19.167-06:00, with the same index (optimize-process-instance-documentprocessflow_v6), document id (08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7), and stack trace]
{"timestamp":"2022-02-17T15:03:21.262-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:03:23.299-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:03:25.222-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:03:27.221-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:03:29.145-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:03:31.178-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:03:33.147-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:03:35.053-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:03:37.473-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:03:39.701-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:03:41.754-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:03:43.681-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:03:45.736-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:04:35.381-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:04:37.872-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:04:40.024-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:04:42.537-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:04:45.718-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:04:48.057-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:04:52.083-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:04:54.560-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:04:56.720-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:04:58.602-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:05:00.775-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:05:05.730-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:05:52.402-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:05:54.563-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:05:56.920-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:05:58.906-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:06:01.633-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:06:04.195-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:06:07.389-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:06:11.670-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:06:14.173-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:06:16.769-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:06:19.270-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:06:21.683-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:07:15.642-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:07:17.748-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:07:20.038-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:07:22.225-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:07:24.321-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:07:26.439-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:07:29.951-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:07:33.513-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:07:35.676-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:07:38.431-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:07:40.303-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:07:42.764-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:07:44.886-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:07:47.101-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:07:49.436-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:07:51.958-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:07:54.204-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:07:56.237-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:07:58.383-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:08:00.883-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:08:03.353-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:08:05.588-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:08:08.051-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:08:10.289-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:08:14.156-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:08:17.246-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T15:09:04.399-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:06.682-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:08.529-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:10.520-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:09:12.593-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:14.719-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:16.826-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:18.977-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:09:21.045-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:23.031-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:24.912-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:27.074-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
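Every entry above reports the same failure: the bulk import into index optimize-process-instance-documentprocessflow_v6 is rejected with a mapper_parsing_exception because one process instance document exceeds Elasticsearch's nested-object limit of 10,000. The exception text itself names the remedy. A minimal sketch of the corresponding Optimize configuration change follows, assuming the standard environment-config.yaml layout; the key path es.settings.index.nested_documents_limit is taken verbatim from the log message, while the value 20000 is purely illustrative, not a recommendation:

```yaml
# Sketch of an Optimize environment configuration fragment (assumed file:
# environment-config.yaml). Key path taken from the log message above.
# 20000 is an illustrative value -- raise it cautiously, since more nested
# documents per process instance increases heap and mapping pressure on
# the Elasticsearch cluster.
es:
  settings:
    index:
      nested_documents_limit: 20000
```

For indices that already exist, the underlying Elasticsearch setting named in the exception, index.mapping.nested_objects.limit, may also need to be raised directly on the affected index (for example via the index `_settings` API) before the import can succeed; consult the Optimize and Elasticsearch documentation for the supported procedure on your versions.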
{"timestamp":"2022-02-17T15:09:35.066-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:09:37.038-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:38.918-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:40.940-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:42.858-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:09:44.959-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:46.990-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:49.103-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:51.414-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:09:53.809-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:55.930-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:09:58.179-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:10:38.295-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:10:40.457-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:10:42.842-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:10:45.266-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:10:47.608-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:10:49.940-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:10:52.142-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:10:54.129-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:10:56.641-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:10:58.735-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:11:00.782-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:11:03.220-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
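The mapper_parsing_exception in the entries above names two related knobs: the Elasticsearch index-level setting `index.mapping.nested_objects.limit` (default 10000, per the error text) and Optimize's own configuration key `es.settings.index.nested_documents_limit`, which the exception message tells you to raise. A minimal sketch of the Optimize side, assuming the standard environment-config.yaml layout; the exact key path is inferred from the dotted name in the log and the example value is illustrative, so verify both against the Optimize documentation for your version:

```yaml
# environment-config.yaml (Camunda Optimize)
# Raise the nested-object ceiling cautiously: Elasticsearch indexes each
# nested object as a separate Lucene document, so large values increase
# heap usage and indexing cost.
es:
  settings:
    index:
      nested_documents_limit: 15000   # example value; the default limit is 10000
```

Optimize is expected to apply this value as `index.mapping.nested_objects.limit` on the indexes it manages; for indexes that already exist, the Elasticsearch index setting may also need to be updated directly before the failing bulk import can succeed.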
{"timestamp":"2022-02-17T15:11:45.065-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:11:47.390-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:11:49.486-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:11:51.598-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:11:53.835-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:11:55.768-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:11:58.069-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:12:00.355-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:12:02.623-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:12:04.716-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:12:06.927-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:12:09.006-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:12:10.961-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:12:49.267-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:12:51.319-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:12:53.327-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:12:55.462-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:12:57.565-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:12:59.474-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:13:01.685-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:13:04.032-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:13:06.476-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:13:08.743-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:13:10.940-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:13:13.302-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:13:15.520-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T15:13:57.946-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:14:01.895-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:14:03.868-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:14:06.384-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:14:08.549-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:14:10.674-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:14:12.753-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:14:14.824-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:14:16.795-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:14:18.750-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:14:20.627-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:14:22.802-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:14:24.768-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:15:03.597-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:15:05.425-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:15:07.426-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:15:09.315-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:15:11.180-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:15:13.200-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:15:15.300-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:15:17.405-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:15:19.294-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:15:21.210-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:15:23.650-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:15:25.699-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T15:15:27.679-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
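The recurring exception above names its own remedy: Optimize's configured nested object limit, exposed as `es.settings.index.nested_documents_limit` and mapped to Elasticsearch's `index.mapping.nested_objects.limit` index-level setting, is too low for the affected process instance documents. A hedged sketch of the corresponding Optimize `environment-config.yaml` fragment follows; the value `15000` is an illustrative assumption, not a recommendation, since the message itself advises increasing the limit carefully:

```yaml
# Hypothetical environment-config.yaml fragment (Camunda Optimize).
# Raises the nested object limit Optimize applies to its indices;
# corresponds to the es.settings.index.nested_documents_limit key
# named in the stack trace. 15000 is an illustrative value only.
es:
  settings:
    index:
      nested_documents_limit: 15000
```

On the Elasticsearch side, the same error message points at the `index.mapping.nested_objects.limit` setting of the failing index, `optimize-process-instance-documentprocessflow_v6`; the limit exists to bound memory use per document, so it should be raised cautiously rather than removed.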
{"timestamp":"2022-02-17T15:16:07.655-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:16:10.023-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:16:12.566-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:16:14.729-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:16:17.687-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:16:19.900-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:16:22.278-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:16:24.517-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:16:26.713-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:16:28.897-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:16:31.395-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:16:33.790-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[The identical ERROR entry (same class, method, thread, index [optimize-process-instance-documentprocessflow_v6], document id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], and stack trace) repeats 18 times at roughly 2-second intervals, from timestamp 2022-02-17T15:16:36.329-06:00 through 2022-02-17T15:17:16.131-06:00; the duplicate entries are elided here.]
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:17:18.301-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:17:20.456-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:17:22.816-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:17:25.195-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:17:27.747-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:17:30.910-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:17:33.701-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:17:37.065-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:17:39.163-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:17:43.069-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:17:45.761-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:17:48.027-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}

[The identical ERROR entry repeats at roughly two-second intervals from 2022-02-17T15:17:50 through 2022-02-17T15:18:31 (-06:00): same index [optimize-process-instance-documentprocessflow_v6], same document id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], same mapper_parsing_exception (nested document limit of [10000] exceeded), and an identical stack trace. Duplicate entries omitted.]
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:18:33.789-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:18:35.834-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:18:38.250-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:18:40.716-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:18:43.002-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:18:45.041-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:18:47.469-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:18:49.569-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:18:51.655-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:18:53.675-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:18:55.501-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[The identical ERROR entry repeats roughly every 2 seconds from 2022-02-17T15:18:57.885-06:00 through 2022-02-17T15:19:30.920-06:00 — same thread (ImportJobExecutor-pool-5), same index (optimize-process-instance-documentprocessflow_v6), same document id (08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7), and same stack trace each time; duplicate entries elided.]
{"timestamp":"2022-02-17T15:19:33.383-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:19:35.648-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:19:37.893-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:19:40.014-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:19:42.021-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:19:44.022-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:19:45.964-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:19:48.215-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:19:50.145-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:19:52.104-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:19:54.070-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:19:56.015-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:19:58.059-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:20:00.093-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T15:20:47.588-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:20:50.118-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:20:52.408-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:20:54.878-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:20:57.195-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:20:59.222-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:21:01.476-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:21:03.517-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:21:06.035-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:21:08.201-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:21:11.491-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:21:14.213-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T15:21:59.606-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:22:01.780-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:22:03.909-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:22:05.889-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:22:07.896-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:22:10.062-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:22:11.957-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:22:14.002-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:22:16.065-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:22:18.211-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:22:20.268-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:22:22.385-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:22:24.380-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T15:22:26.462-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
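Every entry above is the same failure: bulk item [213] against index [optimize-process-instance-documentprocessflow_v6] (document id 08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7) is rejected with a mapper_parsing_exception because the document exceeds Elasticsearch's default limit of 10000 nested objects, and the import job retries it roughly every two seconds. As the exception text itself suggests, the remedy is to carefully raise Optimize's configured nested object limit (es.settings.index.nested_documents_limit). A minimal sketch of the corresponding Optimize configuration fragment follows — the value 15000 is an illustrative placeholder, and the exact key path and file name (environment-config.yaml assumed here) should be verified against your Optimize version:

```yaml
# Camunda Optimize environment-config.yaml (sketch; verify against your version).
# Raises the nested object limit named in the exception above.
# 15000 is illustrative — increase cautiously, since each nested object
# is indexed as a separate Lucene document and consumes additional heap.
es:
  settings:
    index:
      nested_documents_limit: 15000
```

For indices that already exist, the matching Elasticsearch setting can likely be updated in place, since index.mapping.nested_objects.limit is a dynamic index setting — e.g. PUT /optimize-process-instance-documentprocessflow_v6/_settings with body {"index.mapping.nested_objects.limit": 15000} (endpoint and value again illustrative; check the Optimize documentation the log message points to before changing production indices).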
{"timestamp":"2022-02-17T15:23:05.022-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:23:07.315-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:09.427-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:11.670-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:13.895-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:23:16.184-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:18.591-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:21.030-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:25.423-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:23:28.554-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:31.039-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:33.894-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:37.999-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:23:39.997-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:42.510-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:44.734-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:47.035-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:23:49.246-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:51.675-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:54.126-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:23:57.602-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:23:59.909-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:24:02.919-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:24:05.148-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:24:08.696-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:24:13.592-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:24:15.879-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T15:25:10.855-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:25:13.467-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:25:17.552-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:25:19.779-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:25:21.938-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:25:24.497-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:25:27.250-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:25:29.635-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:25:31.536-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:25:33.849-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:25:36.326-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:25:39.673-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:25:42.058-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:26:31.126-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:26:33.692-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:26:36.264-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:26:38.467-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:26:40.675-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:26:43.323-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:26:45.531-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:26:47.500-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:26:49.670-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:26:51.701-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:26:53.623-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:26:55.552-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[previous ERROR entry repeated 18 times, verbatim except for the timestamp, between 2022-02-17T15:26:57.719-06:00 and 2022-02-17T15:27:31.858-06:00 — same index [optimize-process-instance-documentprocessflow_v6], same document id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], same stack trace]
{"timestamp":"2022-02-17T15:27:33.780-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:27:35.803-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:27:37.656-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:27:39.746-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:27:41.669-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:27:43.651-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:27:45.780-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:27:48.010-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:27:50.372-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:27:52.834-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:27:54.988-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:27:57.490-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:28:01.048-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:28:51.572-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:28:53.652-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:28:56.112-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:28:58.727-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:29:02.005-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:29:04.352-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:29:06.681-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:29:08.645-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:29:11.957-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:29:14.432-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:29:16.748-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:29:19.282-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[The identical ERROR entry (same index, same document id 08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7, same stack trace) recurs roughly every two seconds from 2022-02-17T15:29:22.226-06:00 through 2022-02-17T15:29:57.369-06:00.]
{"timestamp":"2022-02-17T15:30:01.183-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:30:05.297-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:30:07.742-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:30:10.816-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:30:12.848-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:30:14.935-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:30:17.659-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:30:19.823-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:30:23.562-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:30:25.846-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:30:28.151-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:30:31.240-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:30:34.390-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T15:31:10.053-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:31:14.146-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:31:16.409-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:31:18.690-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:31:21.034-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:31:23.330-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:31:25.382-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:31:27.421-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:31:29.658-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:31:32.160-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:31:34.405-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:31:36.601-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:31:38.562-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T15:32:23.326-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:32:25.405-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:32:29.291-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:32:33.237-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:32:35.330-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:32:37.599-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:32:39.695-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:32:42.363-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:32:44.486-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:32:46.631-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:32:48.872-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:32:51.399-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:32:53.530-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:32:55.827-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:32:59.149-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:33:01.487-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:33:03.574-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:33:05.705-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:33:08.162-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:33:10.456-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:33:12.879-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:33:14.716-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:33:18.879-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:33:20.990-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:33:23.448-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:33:26.209-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:33:28.589-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:33:31.749-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:34:19.655-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:34:21.832-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:34:23.977-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:34:25.951-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:34:27.985-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:34:29.969-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:34:32.004-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:34:33.813-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:34:35.971-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:34:37.981-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:34:40.013-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:34:41.859-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:34:43.879-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:35:31.788-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:35:34.076-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:35:36.235-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:35:39.957-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:35:43.354-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:35:46.169-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:35:48.791-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:35:51.069-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:35:53.180-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:35:55.366-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:35:59.223-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:36:01.531-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:36:03.909-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
[The identical ERROR entry recurs 16 more times, every two to four seconds, from 2022-02-17T15:36:06.089-06:00 through 2022-02-17T15:36:42.958-06:00, each with the same stack trace, the same index (optimize-process-instance-documentprocessflow_v6), the same document id (08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7), and the same nested-document limit of 10000.]
{"timestamp":"2022-02-17T15:36:45.237-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:36:47.387-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:36:49.213-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:36:51.617-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:36:53.493-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:36:55.486-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:36:57.329-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:36:59.721-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:37:02.898-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:37:05.151-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:37:07.392-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:37:09.720-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:37:11.916-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:37:14.200-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[The entry above repeated roughly every 2 seconds from 15:37:16.167 through 15:37:49.413 (-06:00), identical except for the timestamp: same index, same document id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], same bulk item [213], same stack trace. Duplicate entries elided.]
{"timestamp":"2022-02-17T15:37:51.345-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:37:53.443-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:37:55.351-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:37:57.869-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:37:59.719-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:38:01.989-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:38:03.938-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:38:05.854-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:38:07.924-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:38:10.022-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:38:12.101-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:38:14.033-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T15:38:15.873-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:38:52.535-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:38:54.423-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:38:56.632-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:38:58.563-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:39:00.520-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:39:02.900-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:39:04.898-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:39:06.940-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:39:08.927-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:39:10.727-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:39:12.982-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:39:15.238-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T15:39:17.424-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[The same ERROR entry, identical except for its timestamp (same index, bulk item [213], document id 08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7, same stack trace), repeats at 15:39:19.527, 15:39:21.527, 15:39:23.621, 15:39:25.557, 15:39:27.625, 15:39:29.596, 15:39:31.816, 15:39:33.929, 15:39:36.174, 15:39:38.597, 15:39:41.165, 15:39:43.436, 15:39:45.821, and 15:39:48.639 (all -06:00); the duplicate stack traces are elided.]
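The recurring exception names its own remediation: the nested-object limit Optimize applies to its process-instance indices (exposed by Elasticsearch as `index.mapping.nested_objects.limit`, default 10000) can be raised via the Optimize setting `es.settings.index.nested_documents_limit` cited in the message. A minimal sketch of that change in Optimize's YAML configuration follows; the key nesting is assumed from the dotted name in the log, and the file name and exact layout may vary by Optimize version, so verify against the Optimize documentation. Optimize typically needs a restart to pick the change up.

```yaml
# environment-config.yaml (Camunda Optimize) -- hypothetical excerpt.
# Key nesting assumed from the dotted setting name in the log message:
# es.settings.index.nested_documents_limit
es:
  settings:
    index:
      # Raise cautiously above the default of 10000, as the error message
      # advises; very large nested-document counts increase Elasticsearch
      # heap and indexing pressure.
      nested_documents_limit: 15000
```

Because `index.mapping.nested_objects.limit` is a dynamic index setting, an already-created index such as `optimize-process-instance-documentprocessflow_v6` can also be adjusted directly through the Elasticsearch update index settings API, though letting Optimize manage the value keeps newly created indices consistent.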
[The same ERROR entry repeats again at 15:39:50.900, 15:39:53.518, and 15:39:56.955 (all -06:00); the duplicate stack traces are elided.]
{"timestamp":"2022-02-17T15:39:59.906-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:40:03.954-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:40:06.465-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:40:08.607-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:40:11.371-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:40:14.131-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:40:17.687-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:40:21.017-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:40:24.458-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:40:26.657-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:40:28.870-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:40:30.953-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:40:33.280-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
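Editor's note: every entry above is the same mapper_parsing_exception retried roughly every 2-4 seconds: bulk item [213] targeting index [optimize-process-instance-documentprocessflow_v6] (document id 08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7) exceeds Elasticsearch's nested document limit of [10000]. The log message itself names the remedy: raise the Optimize setting es.settings.index.nested_documents_limit, which corresponds to the index.mapping.nested_objects.limit index-level setting in Elasticsearch. A minimal sketch of the override, assuming the dotted key maps to nested YAML in Optimize's environment-config.yaml; the value 15000 is purely illustrative:

```yaml
# Hypothetical environment-config.yaml fragment (sketch, not verified config).
# Key taken from the log message; 15000 is an assumed value -- size it to the
# largest process instance, since deep nesting increases Elasticsearch heap use.
es:
  settings:
    index:
      nested_documents_limit: 15000
```

Depending on the Optimize and Elasticsearch versions, the new limit may only apply when indices are (re)created, so the existing index's index.mapping.nested_objects.limit setting may also need updating before the stuck document can import.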
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:41:17.146-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:41:19.239-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:41:21.461-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:41:23.596-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:41:25.720-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:41:27.659-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:41:29.724-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:41:31.867-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:41:33.987-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:41:36.007-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:41:38.068-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:41:40.089-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[Identical ERROR entries (same index [optimize-process-instance-documentprocessflow_v6], same document id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], same stack trace) recur roughly every 2 seconds from 2022-02-17T15:41:42.018-06:00 through 2022-02-17T15:42:20.576-06:00; duplicate entries omitted.]
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:42:22.653-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:42:24.732-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:42:26.842-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:42:28.865-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:42:31.191-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:42:33.702-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:42:35.759-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:42:38.269-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:42:40.442-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:42:42.553-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:42:44.596-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T15:43:27.226-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:43:29.878-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:43:33.198-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:43:35.247-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:43:39.164-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:43:41.597-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:43:43.696-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:43:45.965-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:43:48.203-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:43:50.366-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:43:52.727-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:43:55.145-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:43:57.807-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[The identical ERROR entry (CompletedActivityInstanceElasticsearchImportJob.executeImport, OptimizeRuntimeException: mapper_parsing_exception, nested document limit of [10000] exceeded on index optimize-process-instance-documentprocessflow_v6, bulk item [213], id 08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7) repeats 19 times at roughly 2-3 second intervals, timestamps 2022-02-17T15:44:00.371-06:00 through 2022-02-17T15:44:46.055-06:00; the final entry is truncated mid-message.]
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:44:48.063-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:44:51.751-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:44:54.145-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:44:56.346-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:44:58.526-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:45:01.949-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:45:04.009-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:45:07.507-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:45:09.735-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:45:12.310-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:45:14.295-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[... 17 further ERROR entries with identical content (same index [optimize-process-instance-documentprocessflow_v6], same document id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], same stack trace), logged roughly every 2-3 seconds between 2022-02-17T15:45:17.482-06:00 and 2022-02-17T15:45:55.155-06:00, omitted as duplicates ...]
{"timestamp":"2022-02-17T15:45:57.257-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:45:59.410-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:46:01.971-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:46:04.185-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:46:06.653-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:46:08.707-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:46:10.930-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:46:13.246-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:46:15.548-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:46:18.020-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:46:20.599-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:46:23.135-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:46:26.499-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
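The entries above all report the same mapper_parsing_exception: documents in index optimize-process-instance-documentprocessflow_v6 exceed Elasticsearch's limit of 10000 nested objects. The error message itself points at one remediation, raising Optimize's configured nested object limit, es.settings.index.nested_documents_limit. A minimal sketch of that change (the configuration file name and the value 20000 are assumptions, not part of this log):

```yaml
# Assumed location: Optimize environment configuration (e.g. environment-config.yaml)
es:
  settings:
    index:
      # Assumed value; raise cautiously above the default of 10000, since very
      # deeply nested process instance documents increase mapping and heap pressure.
      nested_documents_limit: 20000
```

This key corresponds to the Elasticsearch index-level setting index.mapping.nested_objects.limit named in the exception above; the affected index must pick up the new limit before the failing bulk import can succeed.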
{"timestamp":"2022-02-17T15:47:12.449-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:47:14.546-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:47:16.832-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:47:21.018-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:47:23.111-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:47:25.883-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:47:27.764-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:47:30.021-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:47:33.832-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:47:37.363-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:47:40.332-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:47:43.970-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:48:34.530-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:48:36.929-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:48:39.104-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:48:41.137-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:48:44.721-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:48:46.906-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:48:49.407-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:48:52.539-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:48:55.649-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:48:58.399-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:49:01.272-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:49:03.659-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
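Every entry in this log is the same `mapper_parsing_exception`: a process instance document in index `optimize-process-instance-documentprocessflow_v6` exceeds the default nested object limit of 10000. The stack trace's own hint names the Optimize configuration key `es.settings.index.nested_documents_limit`, which Optimize maps onto the Elasticsearch index-level setting `index.mapping.nested_objects.limit`. A minimal sketch of the corresponding `environment-config.yaml` fragment, assuming the key nests along its dotted path and that 20000 is enough headroom for this process (both are assumptions; raise the limit conservatively, since every nested object is indexed as a separate Lucene document and drives up heap usage):

```yaml
# environment-config.yaml (Camunda Optimize)
# 20000 is a hypothetical value: choose the smallest limit that fits
# your largest process instance, as each nested object is stored as a
# separate Lucene document.
es:
  settings:
    index:
      nested_documents_limit: 20000
```

A restart of Optimize is needed for the change to take effect; on startup Optimize should apply the raised limit to its existing indices, after which the failing instance (`08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7`) can be retried by the import job.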
{"timestamp":"2022-02-17T15:49:46.682-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:49:48.739-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:49:50.596-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:49:52.693-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:49:54.624-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:49:56.758-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:49:58.642-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:50:00.474-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:50:03.608-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:50:05.511-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:50:07.687-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:50:09.707-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:50:12.249-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:50:53.304-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:50:55.562-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:50:58.232-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:51:00.747-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:51:04.584-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:51:07.295-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:51:09.578-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:51:12.260-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:51:14.636-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:51:18.611-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:51:20.826-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:51:24.646-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:51:27.724-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T15:51:29.977-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T15:52:20.343-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:52:22.564-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:52:25.467-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:52:27.761-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:52:30.107-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:52:32.198-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:52:34.728-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:52:36.716-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:52:38.997-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:52:41.101-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:52:43.292-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:52:45.517-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:52:47.522-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:53:25.889-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:53:27.887-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:53:30.017-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:53:32.123-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:53:34.067-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:53:36.086-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:53:38.020-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:53:39.973-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:53:42.036-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:53:43.936-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:53:46.015-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:53:48.282-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:53:50.443-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:54:29.632-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:54:31.807-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:54:33.912-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:54:36.055-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:54:39.642-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:54:41.871-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:54:44.220-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:54:46.730-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:54:49.314-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:54:52.964-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:54:54.999-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:54:58.937-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[... identical ERROR entries (same class, method, stack trace, index, and document id) repeat roughly every two seconds, from 2022-02-17T15:55:00.888-06:00 through 2022-02-17T15:55:35.593-06:00 ...]
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:55:37.663-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:55:39.700-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:55:42.000-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:55:43.779-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:55:45.748-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:55:47.983-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:55:50.537-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:55:52.730-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:55:54.552-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:55:56.927-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:55:59.003-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:56:01.227-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:56:43.628-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:56:45.645-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:56:47.948-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:56:50.084-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:56:52.337-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:56:54.815-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:56:57.196-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:56:59.214-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:57:01.579-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:57:03.606-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:57:05.736-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:57:08.096-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:57:10.547-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:57:13.260-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[... the identical ERROR entry (same stack trace, index, and document id) repeats 16 more times at 2-3 second intervals, timestamps 2022-02-17T15:57:17.392-06:00 through 2022-02-17T15:57:51.556-06:00 ...]
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:57:54.073-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:57:55.981-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:57:58.173-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:58:00.473-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:58:03.073-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:58:04.921-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:58:06.924-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:58:08.860-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:58:10.986-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:58:13.016-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:58:15.060-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:58:17.072-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:58:19.156-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[... 14 further identical ERROR entries omitted: timestamps 2022-02-17T15:58:21.040-06:00 through 2022-02-17T15:58:48.329-06:00, same stack trace, same index [optimize-process-instance-documentprocessflow_v6], same document id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7] ...]
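Editor's note: every entry in this burst is the same mapper_parsing_exception, and its message names both the remedy and the relevant keys: the Optimize configuration property es.settings.index.nested_documents_limit, which corresponds to the Elasticsearch index-level setting index.mapping.nested_objects.limit. A minimal sketch of the matching environment-config.yaml fragment follows; the key path is taken from the log message itself, but the surrounding file layout and the chosen value are assumptions to verify against the Optimize configuration documentation for your version.

```yaml
# environment-config.yaml (Camunda Optimize) -- sketch, not a definitive config.
# Raises the nested-object limit that the bulk import exceeded
# (the log shows the default of 10000 being hit). Increase cautiously:
# each nested document is indexed as a separate Lucene document, so very
# large values raise memory and indexing pressure on Elasticsearch.
es:
  settings:
    index:
      nested_documents_limit: 20000  # example value; tune to your data
```

After changing the limit, the failed instances should be picked up again on subsequent import rounds, since the importer retries the bulk (as the repeated entries above show).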
[... 3 further identical ERROR entries omitted: timestamps 2022-02-17T15:58:50.595-06:00, 15:58:52.791-06:00, 15:58:54.645-06:00, same stack trace and document id ...]
{"timestamp":"2022-02-17T15:58:56.753-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:58:58.908-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:01.226-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:03.271-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:05.559-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:59:07.738-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:09.838-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:12.056-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:14.393-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:59:16.759-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:19.725-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:22.126-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:23.589-06:00","level":"INFO","service":"camunda-optimize","serviceId":"20088","thread":"qtp1871678080-396","class":"com.mgic.ret.camunda.optimize.auth.UserHeaderAuthenticationExtractor","method":"extractAuthenticatedUser","message":"Logged UserId: 'retAdmin'","line":"24"} 
{"timestamp":"2022-02-17T15:59:24.298-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:59:35.238-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:38.405-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:40.532-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:42.784-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:59:44.788-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:47.065-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:49.134-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:51.624-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T15:59:53.807-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T15:59:58.015-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:00:00.233-06:00","level":"INFO","service":"camunda-optimize","serviceId":"20088","thread":"ThreadPoolTaskScheduler-1","class":"org.camunda.optimize.service.identity.UserIdentityCacheService","method":"syncIdentitiesWithRetry","message":"Engine user identity sync complete","line":"114"} 
{"timestamp":"2022-02-17T16:00:00.849-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:00:04.129-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
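The repeated mapper_parsing_exception above means a process instance's activity data exceeds Elasticsearch's default nested-object limit of 10000 for the index. As the log message itself suggests, the limit Optimize applies can be raised via the `es.settings.index.nested_documents_limit` key in Optimize's environment configuration. A minimal sketch, assuming a standard environment-config.yaml layout (the value 20000 is illustrative, not a recommendation — raise the limit cautiously, since large nested arrays increase heap and indexing pressure):

```yaml
# environment-config.yaml (Camunda Optimize) — illustrative sketch.
# Raises the nested-object limit Optimize configures on the indices it
# creates; 20000 is an example value only.
es:
  settings:
    index:
      nested_documents_limit: 20000
```

For indices that already exist, the Elasticsearch index-level setting named in the error, `index.mapping.nested_objects.limit`, may also need to be updated on those indices directly; consult the Optimize documentation referenced in the log before changing either value.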
{"timestamp":"2022-02-17T16:00:49.879-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:00:52.153-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:00:54.380-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:00:56.537-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:00:58.714-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:01:02.379-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:01:04.644-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:01:08.617-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:01:10.808-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:01:12.809-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:01:14.659-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:01:16.651-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:01:18.456-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:01:20.430-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:01:22.397-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:01:24.447-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:01:26.563-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:01:28.718-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:01:30.891-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:01:32.933-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:01:34.744-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:01:36.782-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:01:38.738-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:01:40.653-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:01:42.677-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:01:44.675-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:01:46.546-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[18 further identical ERROR entries elided: same CompletedActivityInstanceElasticsearchImportJob "Error while executing import to elasticsearch" stack trace against index optimize-process-instance-documentprocessflow_v6, timestamps 2022-02-17T16:01:48.583-06:00 through 2022-02-17T16:02:26.102-06:00.]
{"timestamp":"2022-02-17T16:02:30.305-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:02:32.990-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:02:35.104-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:02:37.362-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:02:39.663-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:02:42.059-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:02:44.656-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:02:47.264-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:02:49.361-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:02:51.356-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:02:53.616-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:02:55.747-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:02:57.839-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
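The entry above recurs unchanged every ~2 seconds as the import job retries: a bulk index into optimize-process-instance-documentprocessflow_v6 fails with mapper_parsing_exception because a process-instance document exceeds Elasticsearch's nested-object limit of 10000. The log's own hint points at the Optimize setting es.settings.index.nested_documents_limit. A minimal sketch of the corresponding environment-config.yaml fragment, assuming the default Optimize configuration layout (the value 20000 is illustrative, not a recommendation):

```yaml
# Optimize environment-config.yaml fragment (illustrative value).
# Raises the nested-object limit Optimize applies to the indices it creates.
# Increase cautiously: deeply nested documents add heap and mapping pressure.
es:
  settings:
    index:
      nested_documents_limit: 20000
```

For an index that has already hit the limit, the equivalent index-level Elasticsearch setting named in the log, index.mapping.nested_objects.limit, can also be raised directly through the index _settings API; changing the Optimize configuration additionally covers indices created later.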
{"timestamp":"2022-02-17T16:03:36.699-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:03:38.754-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:03:40.910-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:03:43.111-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:03:45.138-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:03:47.249-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:03:49.788-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:03:51.942-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:03:54.199-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:03:56.530-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:03:58.850-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:04:01.236-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:04:04.599-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:04:46.707-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:04:48.584-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:04:50.487-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:04:52.400-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:04:54.319-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:04:56.690-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:04:58.878-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:05:01.651-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:05:05.690-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:05:07.769-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:05:10.046-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:05:12.430-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:05:14.732-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:05:53.812-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:05:56.226-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:05:58.522-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:01.409-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:03.662-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:05.911-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:06:08.193-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:10.977-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:14.084-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:16.248-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:06:18.581-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:20.952-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:23.421-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:25.613-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:06:27.457-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:29.430-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:32.206-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:34.328-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:06:36.598-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:38.942-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:40.961-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:42.766-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:06:44.749-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:46.706-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:48.942-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:06:50.806-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:06:53.267-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:07:36.117-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:07:38.567-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:07:41.095-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:07:43.531-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:07:46.530-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:07:48.802-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:07:51.440-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:07:53.947-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:07:58.291-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:08:00.543-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:08:03.174-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:08:05.433-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:08:54.536-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:08:56.650-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:08:58.684-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:09:02.517-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:09:04.777-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:09:06.873-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:09:09.029-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:09:11.419-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:09:13.562-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:09:15.346-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:09:17.305-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:09:19.539-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:09:22.079-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:09:24.712-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:10:09.655-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:10:11.958-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:10:15.368-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:10:18.411-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:10:21.013-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:10:25.036-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:10:27.138-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:10:29.362-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:10:32.385-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:10:36.275-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:10:38.362-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:10:41.676-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:10:44.143-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:10:46.797-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:10:48.964-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:10:52.187-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:10:55.566-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:10:58.071-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:11:00.087-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:11:03.524-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:11:05.996-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:11:09.877-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:11:12.236-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:11:14.372-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:11:16.229-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:11:18.448-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:11:20.388-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:11:22.757-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:11:25.049-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:11:27.771-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:11:30.151-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:11:32.457-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:11:34.741-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:11:37.254-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:11:39.396-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:11:41.762-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:11:43.964-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:11:45.851-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:11:48.277-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:11:50.730-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:11:52.803-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:11:55.192-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:11:58.064-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:12:45.081-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:12:47.296-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:12:49.401-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:12:51.507-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:12:53.520-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:12:55.887-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:12:57.812-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:00.182-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:13:02.610-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:04.651-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:06.754-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:08.974-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:13:11.014-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:13.037-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:14.833-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:16.784-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:13:18.818-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:20.711-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:23.123-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:25.160-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:13:27.381-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:29.494-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:32.097-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:34.510-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:13:36.666-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:38.608-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:42.689-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:13:44.381-06:00","level":"INFO","service":"camunda-optimize","serviceId":"20088","thread":"qtp1871678080-448","class":"com.mgic.ret.camunda.optimize.auth.UserHeaderAuthenticationExtractor","method":"extractAuthenticatedUser","message":"Logged UserId: 'retAdmin'","line":"24"} 
{"timestamp":"2022-02-17T16:13:44.919-06:00","level":"INFO","service":"camunda-optimize","serviceId":"20088","thread":"qtp1871678080-417","class":"com.mgic.ret.camunda.optimize.auth.UserHeaderAuthenticationExtractor","method":"extractAuthenticatedUser","message":"Logged UserId: 'retAdmin'","line":"24"}
{"timestamp":"2022-02-17T16:14:30.021-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:14:32.818-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:14:34.870-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:14:38.492-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:14:40.595-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:14:42.791-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:14:45.061-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:14:47.407-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:14:50.281-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:14:52.630-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:14:54.806-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:14:57.404-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:15:00.981-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:15:04.586-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
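The exception text in the entries above points at its own remedy: Optimize's configured nested object limit (`es.settings.index.nested_documents_limit`) is lower than the number of nested activity instances on process instance `08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7`, so the bulk import keeps failing and retrying. A minimal sketch of the fix, assuming the dotted key from the message maps onto the usual YAML nesting of Optimize's `environment-config.yaml` (the value `20000` is illustrative, not a recommendation):

```yaml
# environment-config.yaml (Camunda Optimize) -- key path taken from the
# exception message es.settings.index.nested_documents_limit; the YAML
# nesting shown here is an assumption based on that dotted path.
es:
  settings:
    index:
      # Default limit is 10000; the failing document exceeded it.
      # Increase carefully: Elasticsearch stores each nested object as a
      # separate Lucene document, so large values raise memory pressure.
      nested_documents_limit: 20000
```

This corresponds to the Elasticsearch index-level setting named in the `mapper_parsing_exception` (`index.mapping.nested_objects.limit`); a restart of Optimize is likely needed, and existing indices such as `optimize-process-instance-documentprocessflow_v6` may keep the old limit until the setting is applied to them.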
{"timestamp":"2022-02-17T16:15:51.031-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:15:53.375-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:15:55.722-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:15:57.974-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:16:00.198-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:16:02.895-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:16:05.108-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:16:07.439-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:16:09.611-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:16:11.954-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:16:13.941-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:16:16.352-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[The identical ERROR entry (same index [optimize-process-instance-documentprocessflow_v6], same document id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], same stack trace) repeats verbatim at timestamps 2022-02-17T16:16:18.572, 16:16:20.909, 16:16:23.055, 16:16:25.190, 16:16:27.475, 16:16:29.867, 16:16:32.248, 16:16:34.365, 16:16:36.445, 16:16:38.783, 16:16:41.206, 16:16:43.485, 16:16:46.520, 16:16:49.289, 16:16:52.204, 16:16:54.855, 16:16:58.444, and 16:17:02.567 (all -06:00); the final entry is truncated.]
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:17:04.756-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:17:08.562-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:17:11.880-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:17:14.408-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:17:16.569-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:17:19.731-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:17:21.917-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:17:24.515-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:17:27.266-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:17:30.424-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:17:32.777-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:17:35.010-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[... 15 identical ERROR entries omitted: same class (CompletedActivityInstanceElasticsearchImportJob), same stack trace, same failing bulk item [213] (index [optimize-process-instance-documentprocessflow_v6], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7]), recurring roughly every 2-3 seconds at timestamps 2022-02-17T16:17:37.521-06:00 through 2022-02-17T16:18:11.325-06:00 ...]
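The recurring mapper_parsing_exception above names its own remediation: the Optimize configuration key `es.settings.index.nested_documents_limit`. A minimal sketch of that change, assuming the dotted key maps to nested YAML in Optimize's environment configuration file and using 20000 purely as an illustrative value above the observed 10000 limit:

```yaml
# environment-config.yaml (Camunda Optimize) -- illustrative fragment.
# Raises the nested-object limit Optimize applies to the indices it manages.
# The value 20000 is an example, not a recommendation: each nested object is
# indexed as a separate Lucene document, so raising this limit increases
# heap pressure and index size. Increase carefully, as the log itself advises.
es:
  settings:
    index:
      nested_documents_limit: 20000
```

For an index that already exists (such as `optimize-process-instance-documentprocessflow_v6` from the failing bulk item above), the corresponding Elasticsearch-level setting is `index.mapping.nested_objects.limit`, which can typically be adjusted per index via the `_settings` API; the stack trace shows the import job retrying the same document until one of these limits is raised.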
[... 3 further identical ERROR entries omitted at 2022-02-17T16:18:15.059-06:00, 16:18:17.141-06:00, and 16:18:18.956-06:00 ...]
{"timestamp":"2022-02-17T16:18:21.321-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:18:23.618-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:18:25.887-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:18:28.423-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:18:30.536-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:18:33.603-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:18:36.403-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:18:39.150-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:18:42.632-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:18:45.110-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:18:47.911-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:18:51.135-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:18:53.588-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
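The repeated mapper_parsing_exception above names two knobs: Optimize's own `es.settings.index.nested_documents_limit` configuration key and the underlying Elasticsearch `index.mapping.nested_objects.limit` index setting. As a minimal sketch (assuming the dotted key maps onto Optimize's YAML environment configuration in the usual way; the value 15000 and the file name are illustrative assumptions, not recommendations), the fragment would look roughly like:

```yaml
# Hypothetical fragment of an Optimize environment configuration file
# (e.g. environment-config.yaml). Raises the nested-document limit named
# in the log above; 15000 is an illustrative value, not a recommendation --
# increase carefully, as the error message itself advises.
es:
  settings:
    index:
      nested_documents_limit: 15000
```

Alternatively, the `index.mapping.nested_objects.limit` setting mentioned in the stack trace can be changed per index (here `optimize-process-instance-documentprocessflow_v6`) through the Elasticsearch update-index-settings API. Either way, each nested object is indexed as a separate hidden document, so raising the limit increases memory pressure per parent document; small, careful increments are the safer choice.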
{"timestamp":"2022-02-17T16:18:55.989-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:18:58.300-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:19:00.409-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:19:03.626-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:19:06.184-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:19:08.633-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:19:11.270-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:19:13.876-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:19:16.658-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:19:18.835-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:19:20.973-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:19:24.241-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:19:26.065-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:19:27.867-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[The identical OptimizeRuntimeException and stack trace repeat on 2022-02-17 (-06:00) at 16:19:29.879, 16:19:31.769, 16:19:33.865, 16:19:35.838, 16:19:38.052, 16:19:39.946, 16:19:42.004, 16:19:44.190, 16:19:46.378, 16:19:48.487, 16:19:50.697, 16:19:52.839, 16:19:55.133, 16:19:57.480, 16:20:00.907, 16:20:04.002, 16:20:05.986, and 16:20:09.645, each time for the same index (optimize-process-instance-documentprocessflow_v6), the same document id (08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7), and the same nested-object limit of 10000.]
{"timestamp":"2022-02-17T16:20:12.067-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:20:14.759-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:20:17.331-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:20:19.644-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:20:22.089-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:20:24.780-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:20:27.515-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:20:29.653-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:20:33.515-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:20:33.982-06:00","level":"INFO","service":"camunda-optimize","serviceId":"20088","thread":"qtp1871678080-459","class":"com.mgic.ret.camunda.optimize.auth.UserHeaderAuthenticationExtractor","method":"extractAuthenticatedUser","message":"Logged UserId: 'retAdmin'","line":"24"} {"timestamp":"2022-02-17T16:20:34.107-06:00","level":"INFO","service":"camunda-optimize","serviceId":"20088","thread":"qtp1871678080-454","class":"com.mgic.ret.camunda.optimize.auth.UserHeaderAuthenticationExtractor","method":"extractAuthenticatedUser","message":"Logged UserId: 'retAdmin'","line":"24"} 
{"timestamp":"2022-02-17T16:20:35.975-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:20:39.423-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:20:41.595-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:20:43.881-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:21:18.840-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:21:20.900-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:21:22.997-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:21:24.837-06:00","level":"INFO","service":"camunda-optimize","serviceId":"20088","thread":"qtp1871678080-396","class":"com.mgic.ret.camunda.optimize.auth.UserHeaderAuthenticationExtractor","method":"extractAuthenticatedUser","message":"Logged UserId: 'retAdmin'","line":"24"} 
{"timestamp":"2022-02-17T16:21:52.767-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:21:55.063-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:21:58.038-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:22:00.236-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:22:03.118-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:22:06.213-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:22:08.756-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:22:12.231-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:22:14.494-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:22:16.829-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:22:19.438-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:22:22.785-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:22:26.169-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:23:05.608-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:23:07.950-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:23:09.907-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:23:11.913-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:23:14.668-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:23:16.917-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:23:19.038-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:23:23.381-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:23:26.385-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:23:29.041-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:23:31.853-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:23:34.270-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:23:36.806-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:23:41.206-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
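The remedy the stack trace itself suggests is raising the nested object limit via the Optimize configuration key it names, `es.settings.index.nested_documents_limit`. A minimal sketch of that change, assuming a standalone Optimize installation with an editable `environment-config.yaml` (the key path is taken from the error message; the value 20000 is illustrative, not a recommendation):

```yaml
# environment-config.yaml (Camunda Optimize) -- sketch, not a full config.
# Optimize maps this key onto the Elasticsearch index-level setting
# index.mapping.nested_objects.limit, which the exception above reports
# as exceeded at its default of 10000.
es:
  settings:
    index:
      # Increase cautiously, as the error message itself advises: every
      # nested document adds indexing and heap overhead, so raise this
      # only as far as the failing process instances require.
      nested_documents_limit: 20000
```

Note that the failing index (`optimize-process-instance-documentprocessflow_v6`) already exists, so its underlying `index.mapping.nested_objects.limit` setting may also need to be updated before the stuck bulk import can succeed; consult the Optimize documentation referenced in the error message for the supported procedure.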
{"timestamp":"2022-02-17T16:24:28.610-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:24:30.960-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:24:33.240-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:24:35.411-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:24:37.782-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:24:39.841-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:24:41.886-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:24:43.833-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:24:45.803-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:24:47.718-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:24:49.635-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:24:51.643-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:25:30.850-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:25:32.770-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:25:34.763-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:25:36.793-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:25:38.950-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:25:41.082-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:25:43.342-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:25:45.497-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:25:47.871-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:25:50.436-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:25:52.750-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:25:55.547-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:26:44.235-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:26:46.495-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:26:48.627-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:26:51.058-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:26:54.038-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:26:56.258-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:26:58.596-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:27:00.987-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:27:03.200-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:27:05.225-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:27:07.460-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:27:10.581-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:27:12.612-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:27:52.976-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:27:55.516-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:27:59.159-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:28:01.753-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:28:04.226-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:28:05.973-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:28:07.959-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:28:09.961-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:28:11.915-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:28:13.867-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:28:15.754-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:28:17.803-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:28:19.672-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
[... identical ERROR entries (OptimizeRuntimeException: mapper_parsing_exception, nested document limit of [10000] exceeded on index optimize-process-instance-documentprocessflow_v6, id 08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7, same stack trace) repeated approximately every 2 seconds from 2022-02-17T16:28:21.576-06:00 through 2022-02-17T16:28:55.233-06:00 ...] 
{"timestamp":"2022-02-17T16:28:57.243-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:28:59.262-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:29:01.329-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:29:03.578-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:29:05.587-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:29:07.804-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:29:09.838-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:29:11.839-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:29:14.262-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:29:16.497-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:29:18.597-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:29:20.707-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:30:00.440-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:03.522-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:05.381-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:07.391-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:30:09.314-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:11.858-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:14.308-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:16.549-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:30:19.129-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:21.657-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:24.041-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:27.554-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:30:29.589-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:31.670-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:33.761-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:36.187-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:30:38.357-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:40.560-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:43.152-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:46.737-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:30:49.115-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:51.507-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:54.708-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:30:57.409-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:31:00.308-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:31:03.633-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:31:06.771-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:31:09.309-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:31:12.159-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
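The entries above repeat one and the same failure: the bulk import into index optimize-process-instance-documentprocessflow_v6 is rejected because a process instance document exceeds Elasticsearch's nested-object limit of 10000, and the log suggests raising es.settings.index.nested_documents_limit. A minimal sketch of that change for Optimize's environment configuration, under the assumption that the YAML nesting mirrors the dotted property name from the log (verify the exact layout and a safe value against the Optimize documentation for your version):

```yaml
es:
  settings:
    index:
      # Assumed YAML path for the property named in the log
      # (es.settings.index.nested_documents_limit).
      # 10000 is the limit the log shows being exceeded; raise
      # cautiously, since more nested documents per parent means
      # more Lucene documents and higher heap pressure.
      nested_documents_limit: 15000
```

The stack trace also names the corresponding index-level Elasticsearch setting, index.mapping.nested_objects.limit; changing only the Optimize property or only the live index setting may leave the two out of sync, so the Optimize documentation referenced in the message should be consulted before applying either.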
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:32:33.287-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:32:36.675-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:32:39.635-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:32:43.360-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:32:45.538-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:32:47.763-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:32:49.905-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:32:53.713-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:32:55.921-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:32:59.252-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:33:02.090-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:33:04.305-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[... the identical ERROR entry (thread ImportJobExecutor-pool-5, CompletedActivityInstanceElasticsearchImportJob.executeImport, mapper_parsing_exception on index optimize-process-instance-documentprocessflow_v6, id 08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7, nested document limit of 10000 exceeded) repeats every 2-4 seconds from 2022-02-17T16:33:07.458-06:00 through 2022-02-17T16:33:55.229-06:00; the repeated entries are omitted here ...]
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:33:58.701-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:34:00.988-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:34:04.155-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:34:07.623-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:34:10.660-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:34:12.681-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:34:15.015-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:34:17.800-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:34:20.496-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:34:23.882-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:34:26.590-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:34:29.334-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:35:16.043-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:35:21.239-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:35:23.503-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:35:26.314-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:35:28.915-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:35:31.298-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:35:33.580-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:35:36.801-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:35:39.917-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:35:43.791-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:35:46.465-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:35:49.244-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[... same ERROR entry from thread ImportJobExecutor-pool-5 (serviceId 20088, index optimize-process-instance-documentprocessflow_v6, document id 08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7) repeats verbatim every 2-4 seconds, at least through 2022-02-17T16:36:35.982-06:00; only the timestamp differs ...]
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:36:38.193-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:36:40.264-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:36:42.797-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:36:44.718-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:36:46.757-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:36:48.673-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:36:50.583-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:36:52.741-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:36:54.993-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:36:56.796-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:36:58.846-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:37:41.479-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:37:43.557-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:37:45.552-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:37:47.782-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:37:50.288-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:37:52.690-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:37:55.245-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:37:57.845-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:38:00.369-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:38:02.859-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:38:05.033-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:38:08.758-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[… the identical ERROR entry ("Error while executing import to elasticsearch": OptimizeRuntimeException → mapper_parsing_exception, nested documents limit [10000] exceeded on index optimize-process-instance-documentprocessflow_v6, id 08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7, same stack trace) repeats roughly every two seconds from 2022-02-17T16:38:11.539-06:00 through 2022-02-17T16:38:49.294-06:00; duplicates elided …]
{"timestamp":"2022-02-17T16:38:53.120-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:38:55.350-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:38:57.612-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:00.052-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:03.190-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:39:05.390-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:09.070-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:11.789-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:14.868-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:39:16.973-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:19.917-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:23.127-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:25.382-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:39:27.647-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:29.770-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:32.305-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:34.459-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:39:36.905-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:39.390-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:41.890-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:45.719-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:39:47.790-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:50.326-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:52.646-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:55.179-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:39:57.419-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:39:59.715-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:40:44.803-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:40:47.392-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:40:49.436-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:40:51.686-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:40:53.801-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:40:56.334-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:40:58.462-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:01.037-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:41:03.400-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:06.071-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:08.191-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:10.522-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:41:12.879-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:14.852-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:16.881-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:18.767-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:41:20.778-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:22.826-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:24.804-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:26.812-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:41:29.052-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:31.311-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:33.216-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:35.389-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:41:37.433-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:40.262-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:42.383-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:44.767-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:41:47.635-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:50.055-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:52.757-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:41:56.720-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:41:59.716-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:42:02.320-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:42:04.533-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:42:07.762-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:42:10.009-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:42:12.462-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:42:15.112-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:42:17.129-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:42:19.028-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:42:21.249-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:43:12.073-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:43:16.390-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:43:18.731-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:43:21.767-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:43:23.829-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:43:26.389-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:43:28.597-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:43:30.942-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:43:34.206-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:43:36.466-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:43:38.511-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:43:40.785-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:43:43.155-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[The identical error entry repeats every 2-3 seconds, at 2022-02-17T16:43:46.424-06:00 through 2022-02-17T16:44:29.335-06:00; duplicate entries omitted.]
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:44:31.771-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:44:33.743-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:44:36.037-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:44:38.526-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:44:42.422-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:44:44.474-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:44:48.557-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:44:51.559-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:44:55.118-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:44:57.122-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:44:59.856-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:45:04.112-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:45:06.242-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:45:09.213-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:45:11.492-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:45:13.511-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:45:15.798-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:45:18.019-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:45:20.251-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:45:22.536-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:45:24.739-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:45:26.950-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:45:29.358-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:45:31.527-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:45:33.512-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:45:36.274-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:45:38.450-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:46:25.237-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:46:27.802-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:46:30.145-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:46:32.247-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:46:34.667-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:46:37.398-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:46:39.397-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:46:41.600-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:46:44.008-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:46:46.428-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:46:48.536-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[Note: the identical ERROR entry (same class CompletedActivityInstanceElasticsearchImportJob, same index optimize-process-instance-documentprocessflow_v6, same document id 08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7, same mapper_parsing_exception with nested-object limit 10000) recurs 18 more times on 2022-02-17, at roughly 2-second intervals from 16:46:50.813 through 16:47:26.241; the duplicated stack traces are omitted.]
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:47:28.231-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:47:30.492-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:47:32.388-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:47:34.474-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:47:36.467-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:47:38.711-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:47:40.921-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:47:42.968-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:47:45.200-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:47:47.318-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:47:49.361-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:47:51.617-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:47:54.195-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[The identical ERROR entry (same class, thread, index [optimize-process-instance-documentprocessflow_v6], document id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], and stack trace) repeats on 2022-02-17 (-06:00) at 16:47:56.234, 16:47:58.228, 16:48:00.253, 16:48:02.933, 16:48:04.928, 16:48:06.903, 16:48:08.872, 16:48:10.830, 16:48:12.943, 16:48:14.997, 16:48:19.255, 16:48:21.478, 16:48:24.300, 16:48:26.663, 16:48:29.160, 16:48:31.037, 16:48:33.295, and 16:48:35.684; only the timestamp differs. The log is truncated mid-entry after the 16:48:35.684 header.]
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:48:38.004-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:48:40.442-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:48:43.572-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:48:45.749-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
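The records above all report the same mapper_parsing_exception: a single Optimize process-instance document exceeded the 10000-nested-object default while importing activity instances. The exception itself names the remedy: raise Optimize's es.settings.index.nested_documents_limit. A minimal sketch of what that could look like in Optimize's environment-config.yaml, assuming the dotted key from the log expands to nested YAML in the usual way (the value 20000 and the excerpt layout are illustrative, not taken from this log):

```yaml
# Hypothetical excerpt of Optimize's environment-config.yaml.
# The dotted setting named in the exception,
# es.settings.index.nested_documents_limit, written as nested YAML:
es:
  settings:
    index:
      # Default is 10000, matching the limit in the exception above.
      # Raise cautiously: every nested object becomes a separate Lucene
      # document, so large values increase heap and indexing pressure.
      nested_documents_limit: 20000
```

Optimize applies this when creating its indices; an index that already exists (such as optimize-process-instance-documentprocessflow_v6 here) keeps its old limit, so the Elasticsearch index-level setting named in the exception, index.mapping.nested_objects.limit, may also need to be raised on that index via the Elasticsearch index-settings API.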
{"timestamp":"2022-02-17T16:48:47.913-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:48:50.808-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:48:53.065-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:48:54.996-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:48:57.399-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:48:59.487-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:49:01.487-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:49:04.205-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:49:06.530-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:49:08.556-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:49:11.017-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:49:13.305-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:49:18.347-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:49:20.506-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:49:22.994-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:49:25.082-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:49:27.408-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:49:29.551-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:49:31.606-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:49:33.781-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:49:36.374-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:49:38.619-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
{"timestamp":"2022-02-17T16:50:24.552-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:50:26.746-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:50:29.012-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:50:31.042-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:50:33.148-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:50:35.238-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:50:37.058-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:50:38.900-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:50:40.842-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:50:42.891-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:50:44.756-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:50:46.652-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:50:48.864-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:51:32.522-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:51:34.657-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:51:36.481-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:51:38.585-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:51:40.618-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:51:42.529-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:51:44.581-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:51:46.694-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:51:48.827-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:51:50.928-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:51:52.965-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:51:54.965-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:51:57.289-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:51:59.288-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:52:01.256-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:52:03.835-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:52:06.146-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:52:08.472-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:52:10.750-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:52:12.850-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:52:14.947-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:52:17.149-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:52:19.730-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:52:22.061-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:52:24.397-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:52:26.736-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:52:28.872-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[… identical ERROR entries (same index [optimize-process-instance-documentprocessflow_v6], same document id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], same stack trace) repeat roughly every two seconds from 2022-02-17T16:52:31 through 16:53:11 …]
{"timestamp":"2022-02-17T16:53:13.061-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:53:14.987-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:53:17.031-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:53:19.474-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:53:21.793-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:53:23.898-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:53:26.240-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:53:28.208-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:53:30.202-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:53:32.439-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:53:34.437-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:53:36.397-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
[... identical ERROR entries (same OptimizeRuntimeException / mapper_parsing_exception, same index, same document id, same stack trace) repeat roughly every 2 seconds from 2022-02-17T16:53:38 through 2022-02-17T16:54:12 ...]
{"timestamp":"2022-02-17T16:54:15.291-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:54:17.293-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:54:19.246-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:54:21.369-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:54:23.357-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:54:25.531-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:54:28.063-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:54:30.351-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to 
elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} 
{"timestamp":"2022-02-17T16:54:32.438-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. 
This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:54:34.912-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. 
Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"} {"timestamp":"2022-02-17T16:54:37.098-06:00","level":"ERROR","service":"camunda-optimize","serviceId":"20088","thread":"ImportJobExecutor-pool-5","class":"org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob","method":"executeImport","message":"Error while executing import to elasticsearch","line":"61","stack_trace":"org.camunda.optimize.service.exceptions.OptimizeRuntimeException: There were failures while performing bulk on Completed activity instances.\nIf you are experiencing 
failures due to too many nested documents, try carefully increasing the configured nested object limit (es.settings.index.nested_documents_limit). See Optimize documentation for details. Message: failure in bulk execution:\n[213]: index [optimize-process-instance-documentprocessflow_v6], type [_doc], id [08dd5a42-88c2-11ec-aee0-ee3f0ab4d1d7], message [ElasticsearchException[Elasticsearch exception [type=mapper_parsing_exception, reason=The number of nested documents has exceeded the allowed limit of [10000]. This limit can be set by changing the [index.mapping.nested_objects.limit] index level setting.]]]\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.doBulkRequest(ElasticsearchWriterUtil.java:292)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.lambda$executeImportRequestsAsBulk$3(ElasticsearchWriterUtil.java:165)\n\tat java.util.HashMap.forEach(HashMap.java:1336)\n\tat org.camunda.optimize.service.es.writer.ElasticsearchWriterUtil.executeImportRequestsAsBulk(ElasticsearchWriterUtil.java:145)\n\tat org.camunda.optimize.service.es.job.importing.CompletedActivityInstanceElasticsearchImportJob.persistEntities(CompletedActivityInstanceElasticsearchImportJob.java:41)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.executeImport(ElasticsearchImportJob.java:56)\n\tat org.camunda.optimize.service.es.job.ElasticsearchImportJob.run(ElasticsearchImportJob.java:37)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.lang.Thread.run(Thread.java:834)\n"}
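The remediation named in the stack trace is the Optimize setting `es.settings.index.nested_documents_limit`, which corresponds to the Elasticsearch index-level setting `index.mapping.nested_objects.limit` whose default of 10000 is being exceeded here. A minimal sketch of the change in Optimize's `environment-config.yaml`, assuming the dotted key path maps one-to-one onto YAML nesting and using 20000 as an illustrative (not recommended) value:

```yaml
# environment-config.yaml (Camunda Optimize) -- sketch, value is illustrative
# Raises the nested-object limit above the 10000 reported in the
# mapper_parsing_exception. Increase cautiously: every nested object is a
# separate Lucene document, so large limits inflate index size and heap usage.
es:
  settings:
    index:
      nested_documents_limit: 20000
```

For an index that already exists, the same limit can also be raised directly via the Elasticsearch index settings API (e.g. `PUT /<index>/_settings` with `{"index.mapping.nested_objects.limit": 20000}`); whether Optimize reapplies its configured value to existing indices on startup should be verified against the Optimize documentation for the version in use.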